A unified view on weakly correlated recurrent networks
The diversity of neuron models used in contemporary theoretical neuroscience
to investigate specific properties of covariances raises the question of how
these models relate to each other. In particular, it is hard to distinguish
between generic properties and peculiarities of the abstracted model. Here we
present a unified view on pairwise covariances in recurrent networks in the
irregular regime. We consider the binary neuron model, the leaky
integrate-and-fire model, and the Hawkes process. We show that linear
approximation maps each of these models to one of two classes of linear rate
models, including the Ornstein-Uhlenbeck process as a special case. The classes
differ in the location of additive noise in the rate dynamics, which is on the
output side for spiking models and on the input side for the binary model. Both
classes allow closed-form solutions for the covariance. For output noise, the
covariance separates into an echo term and a term due to correlated input. The
unified
framework enables us to transfer results between models. For example, we
generalize the binary model and the Hawkes process to the presence of
conduction delays and simplify the derivations of established results. Our
approach is applicable to general network structures and suitable for
population averages. The derived averages are exact for fixed out-degree
network architectures and approximate for fixed in-degree. We demonstrate how
taking into account fluctuations in the linearization procedure increases the
accuracy of the effective theory, and we explain the class-dependent
differences between covariances in the time and frequency domains. Finally, we
show that
the oscillatory instability emerging in networks of integrate-and-fire models
with delayed inhibitory feedback is a model-invariant feature: the same
structure of poles in the complex frequency plane determines the population
power spectra.
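As a rough illustration of the input-noise class, the stationary covariance of a linear rate network driven by white noise follows from a Lyapunov equation. The sketch below is our own minimal example with placeholder parameters (random connectivity, unit noise intensity), assuming NumPy and SciPy:

```python
# Minimal sketch (not the paper's code): stationary covariance of a linear
# rate network with input-side noise, the Ornstein-Uhlenbeck special case
#   tau dx/dt = -x + W x + noise.
# The covariance C solves the Lyapunov equation A C + C A^T + D = 0,
# with A = (W - I) / tau and D the noise intensity matrix.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
N, tau, g = 100, 10e-3, 0.5                      # size, time constant, coupling scale
W = g * rng.normal(0, 1 / np.sqrt(N), (N, N))    # random connectivity (assumption)
A = (W - np.eye(N)) / tau
D = np.eye(N) / tau                              # white input noise of unit intensity
C = solve_continuous_lyapunov(A, -D)             # solves A C + C A^T = -D
print("mean pairwise covariance:", C[~np.eye(N, dtype=bool)].mean())
```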
The variability of tidewater-glacier calving: origin of event-size and interval distributions
Calving activity at the termini of tidewater glaciers produces a wide range
of iceberg sizes at irregular intervals. We present calving-event data obtained
from continuous observations of the termini of two tidewater glaciers on
Svalbard, and show that the distributions of event sizes and inter-event
intervals can be reproduced by a simple calving model focusing on the mutual
interplay between calving and the destabilization of the glacier terminus. The
event-size distributions of both the field and the model data extend over
several orders of magnitude and resemble power laws. The distributions of
inter-event intervals are broad, but have a less pronounced tail. In the model,
the width of the size distribution increases with the calving susceptibility of
the glacier terminus, a parameter measuring the effect of calving on the stress
in the local neighborhood of the calving region. Inter-event interval
distributions, in contrast, are insensitive to the calving susceptibility.
Above a critical susceptibility, small perturbations of the glacier result in
ongoing self-sustained calving activity. The model suggests that the shape of
the event-size distribution of a glacier is informative about its proximity to
this transition point. Observations of rapid glacier retreats can be explained
by supercritical self-sustained calving.
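The mutual interplay between calving and terminus destabilization can be caricatured by a branching process in which every calved block destabilizes its neighbors with a probability playing the role of the susceptibility. The toy sketch below is our own illustration of this mechanism, not the authors' terminus model; near the critical point it produces the broad, power-law-like event-size distributions described above:

```python
# Toy branching-process caricature of a calving cascade. Each calved block
# triggers up to n_neighbors follow-up events with probability p (the
# "susceptibility"); the cascade size stands in for the calving-event size.
import numpy as np

def cascade_size(p, n_neighbors=2, cap=100_000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    active, size = 1, 0
    while active and size < cap:                 # cap guards the supercritical regime
        size += active
        active = rng.binomial(n_neighbors, p, size=active).sum()
    return size

rng = np.random.default_rng(1)
sizes = [cascade_size(0.45, rng=rng) for _ in range(10_000)]  # 2*0.45 < 1: subcritical
print("median / max event size:", np.median(sizes), max(sizes))
```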
Decorrelation of neural-network activity by inhibitory feedback
Correlations in spike-train ensembles can seriously impair the encoding of
information by their spatio-temporal structure. An inevitable source of
correlation in finite neural networks is common presynaptic input to pairs of
neurons. Recent theoretical and experimental studies demonstrate that spike
correlations in recurrent neural networks are considerably smaller than
expected based on the amount of shared presynaptic input. By means of a linear
network model and simulations of networks of leaky integrate-and-fire neurons,
we show that shared-input correlations are efficiently suppressed by inhibitory
feedback. To elucidate the effect of feedback, we compare the responses of the
intact recurrent network and systems where the statistics of the feedback
channel are perturbed. The suppression of spike-train correlations and
population-rate fluctuations by inhibitory feedback can be observed both in
purely inhibitory and in excitatory-inhibitory networks. The effect is fully
captured by a linear theory and is already apparent at the macroscopic level
of the population-averaged activity. At the microscopic level,
shared-input correlations are suppressed by spike-train correlations: In purely
inhibitory networks, they are canceled by negative spike-train correlations. In
excitatory-inhibitory networks, spike-train correlations are typically
positive. Here, the suppression of input correlations is not a result of the
mere existence of correlations between excitatory (E) and inhibitory (I)
neurons, but a consequence of a particular structure of correlations among the
three possible pairings (EE, EI, II).
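A linear surrogate reproduces this suppression with a few lines of algebra: add a shared component to the input covariance and compare pairwise correlations with and without inhibitory coupling. The sketch below is a hedged stand-in for the spiking simulations, with assumed parameter values:

```python
# Sketch: mean pairwise correlation in a linear rate network receiving
# partially shared input, with and without inhibitory feedback.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def mean_correlation(W, c_shared=0.2, tau=10e-3):
    N = len(W)
    A = (W - np.eye(N)) / tau
    # private noise plus a component shared by all units
    D = ((1 - c_shared) * np.eye(N) + c_shared * np.ones((N, N))) / tau
    C = solve_continuous_lyapunov(A, -D)                  # A C + C A^T = -D
    r = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))     # correlation coefficients
    return r[~np.eye(N, dtype=bool)].mean()

rng = np.random.default_rng(3)
N = 200
W_inh = -0.8 * rng.binomial(1, 0.2, (N, N)) / np.sqrt(N)  # sparse inhibitory feedback
print("no feedback        :", mean_correlation(np.zeros((N, N))))
print("inhibitory feedback:", mean_correlation(W_inh))
```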
Frequency dependence of signal power and spatial reach of the local field potential
The first recording of electrical potential from brain activity was reported
as early as 1875, yet the interpretation of the signal is still debated. To take
full advantage of the new generation of microelectrodes with hundreds or even
thousands of electrode contacts, an accurate quantitative link between what is
measured and the underlying neural circuit activity is needed. Here we address
the question of how the observed frequency dependence of recorded local field
potentials (LFPs) should be interpreted. By use of a well-established
biophysical modeling scheme, combined with detailed reconstructed neuronal
morphologies, we find that correlations in the synaptic inputs onto a
population of pyramidal cells may significantly boost the low-frequency
components of the generated LFP. We further find that these low-frequency
components may be less 'local' than the high-frequency LFP components in the
sense that (1) the size of the signal-generation region of the LFP recorded at
an electrode is larger, and (2) the LFP generated by a synaptically activated
population spreads further outside the population edge due to volume
conduction.
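The volume-conduction step of such forward models reduces, in the simplest point-source approximation, to phi(r) = sum_k I_k / (4 pi sigma |r - r_k|). A minimal sketch (standard electrostatics with made-up source positions and currents, not the paper's morphology-based scheme):

```python
# Point-source forward model of the extracellular potential in an infinite
# homogeneous volume conductor.
import numpy as np

sigma = 0.3                                      # conductivity (S/m), a typical value
rng = np.random.default_rng(4)
src_pos = rng.uniform(-100e-6, 100e-6, (50, 3))  # 50 sources in a 200-um cube (m)
I = rng.normal(0.0, 1e-9, 50)                    # transmembrane currents (A) ...
I -= I.mean()                                    # ... forced to sum to zero
electrode = np.array([0.0, 0.0, 300e-6])         # recording site 300 um above

r = np.linalg.norm(src_pos - electrode, axis=1)
phi = np.sum(I / (4 * np.pi * sigma * r))
print(f"LFP sample: {phi * 1e6:.3f} uV")
```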
The correlation structure of local cortical networks intrinsically results from recurrent dynamics
The co-occurrence of action potentials of pairs of neurons within short time
intervals has long been known. Such synchronous events can appear time-locked
to the behavior of an animal, and theoretical considerations also argue for a
functional role of synchrony. Early theoretical work tried to explain
correlated activity by neurons transmitting common fluctuations due to shared
inputs. This, however, overestimates correlations. Recently, the recurrent
connectivity of cortical networks was shown to be responsible for the observed
low baseline correlations. Two different explanations were given: one argues that
excitatory and inhibitory population activities closely follow the external
inputs to the network, so that their effects on a pair of cells mutually
cancel. Another explanation relies on negative recurrent feedback to suppress
fluctuations in the population activity, equivalent to small correlations. In a
biological neuronal network, one expects both external inputs and recurrence to
affect correlated activity. The present work extends the theoretical
framework of correlations to include both contributions and explains their
qualitative differences. Moreover, the study shows that the arguments of fast
tracking and recurrent feedback are not equivalent; only the latter correctly
predicts the cell-type-specific correlations.
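In the frequency domain, such a framework takes the standard linear-response form below (our own schematic notation, not an equation quoted from the paper): the cross-spectral matrix C(omega) combines an intrinsic source term D(omega), external-input cross-spectra C_ext(omega), and recurrent feedback through the effective coupling kernel K(omega).

```latex
% 1: identity; K(\omega): linearized effective connectivity kernel;
% D(\omega): intrinsic (autocorrelation) source term;
% C_{\mathrm{ext}}(\omega): cross-spectra of the external inputs.
C(\omega) = \left[\mathbb{1} - K(\omega)\right]^{-1}
            \left[ D(\omega) + C_{\mathrm{ext}}(\omega) \right]
            \left[\mathbb{1} - K(\omega)\right]^{-\dagger}
```

Setting C_ext to zero isolates the recurrent-feedback contribution, while letting K tend to zero recovers the pure shared-input picture, so both explanations can be compared within one expression.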
Rate Dynamics of Leaky Integrate-and-Fire Neurons with Strong Synapses
Firing-rate models provide a practical tool for studying the dynamics of
trial- or population-averaged neuronal signals. A wealth of theoretical and
experimental studies has been dedicated to the derivation or extraction of
such models by investigating the firing-rate response characteristics of
ensembles of neurons. The majority of these studies assumes that neurons
receive input spikes at a high rate through weak synapses (diffusion
approximation). For many biological neural systems, however, this assumption
cannot be justified. So far, it is unclear how time-varying presynaptic firing
rates are transmitted by a population of neurons if the diffusion assumption
is dropped. Here, we numerically investigate the stationary and non-stationary
firing-rate response properties of leaky integrate-and-fire neurons receiving
input spikes through strong excitatory synapses with alpha-function-shaped
postsynaptic currents. Input spike trains are modeled by inhomogeneous Poisson
point processes with sinusoidal rate. Average rates, modulation amplitudes,
and phases of the period-averaged spike responses are measured for a broad
range of stimulus, synapse, and neuron parameters. Across wide parameter
regions, the resulting transfer functions can be approximated by a linear
first-order low-pass filter. Below a critical synaptic weight, the cutoff
frequencies are approximately constant and determined by the synaptic time
constants. Only for synapses with unrealistically strong weights are the
cutoff frequencies significantly increased. To account for stimuli with larger
modulation depths, we combine the measured linear transfer function with the
nonlinear response characteristics obtained for stationary inputs. The
resulting linear–nonlinear model accurately predicts the population response
for a variety of non-sinusoidal stimuli.
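A linear-nonlinear cascade of the kind described here fits in a few lines: a first-order low-pass filter applied in the frequency domain, followed by a static nonlinearity standing in for the measured stationary response curve. All numbers below are placeholders, not the paper's fitted values:

```python
# Sketch of a linear-nonlinear rate model: low-pass filter + static nonlinearity.
import numpy as np

fs, T = 10_000, 2.0                         # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
stim = 5_000 * (1 + 0.8 * np.sin(2 * np.pi * 5 * t))   # presynaptic rate (1/s)

# linear stage: first-order low-pass with cutoff set by a synaptic time constant
f_c = 1 / (2 * np.pi * 2e-3)                # e.g. tau_syn = 2 ms -> f_c ~ 80 Hz
f = np.fft.rfftfreq(len(stim), 1 / fs)
lin = np.fft.irfft(np.fft.rfft(stim) / (1 + 1j * f / f_c), n=len(stim))

# nonlinear stage: placeholder threshold-linear stationary response curve
rate = np.maximum(0.0, 0.02 * (lin - 3_000))
print("output rate range (1/s):", rate.min(), rate.max())
```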
The effect of heterogeneity on decorrelation mechanisms in spiking neural networks: a neuromorphic-hardware study
High-level brain functions such as memory, classification, or reasoning can be
realized by means of recurrent networks of simplified model neurons. Analog
neuromorphic hardware constitutes a fast and energy-efficient substrate for the
implementation of such neural computing architectures in technical applications
and neuroscientific research. The functional performance of neural networks is
often critically dependent on the level of correlations in the neural activity.
In finite networks, correlations are typically inevitable due to shared
presynaptic input. Recent theoretical studies have shown that inhibitory
feedback, abundant in biological neural networks, can actively suppress these
shared-input correlations and thereby enable neurons to fire nearly
independently. For networks of spiking neurons, the decorrelating effect of
inhibitory feedback has so far been explicitly demonstrated only for
homogeneous networks of neurons with linear sub-threshold dynamics. Theory,
however, suggests that the effect is a general phenomenon, present in any
system with sufficient inhibitory feedback, irrespective of the details of the
network structure or the neuronal and synaptic properties. Here, we investigate
the effect of network heterogeneity on correlations in sparse, random networks
of inhibitory neurons with non-linear, conductance-based synapses. Emulations
of these networks on the analog neuromorphic hardware system Spikey allow us to
test the efficiency of decorrelation by inhibitory feedback in the presence of
hardware-specific heterogeneities. The configurability of the hardware
substrate enables us to modulate the extent of heterogeneity in a systematic
manner. We selectively study the effects of shared input and recurrent
connections on correlations in membrane potentials and spike trains. Our
results confirm ...
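The quantity at the center of such emulations, the pairwise spike-train correlation, is commonly estimated as the Pearson correlation of binned spike counts. A minimal sketch with toy Poisson spike trains (bin size and rates are our assumptions):

```python
# Mean pairwise Pearson correlation of binned spike counts.
import numpy as np

def count_correlations(spike_trains, t_max, bin_size=2e-3):
    bins = np.arange(0, t_max + bin_size, bin_size)
    counts = np.array([np.histogram(st, bins)[0] for st in spike_trains])
    r = np.corrcoef(counts)
    return r[~np.eye(len(r), dtype=bool)].mean()

# usage with independent toy Poisson spike trains (illustration only)
rng = np.random.default_rng(5)
trains = [np.sort(rng.uniform(0, 10.0, rng.poisson(100))) for _ in range(20)]
print("mean pairwise count correlation:", count_correlations(trains, 10.0))
```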
Deterministic networks for probabilistic computing
Neural-network models of high-level brain functions such as memory recall and
reasoning often rely on the presence of stochasticity. The majority of these
models assumes that each neuron in the functional network is equipped with its
own private source of randomness, often in the form of uncorrelated external
noise. However, both in vivo and in silico, the number of noise sources is
limited due to space and bandwidth constraints. Hence, neurons in large
networks usually need to share noise sources. Here, we show that the resulting
shared-noise correlations can significantly impair the performance of
stochastic network models. We demonstrate that this problem can be overcome by
using deterministic recurrent neural networks as sources of uncorrelated noise,
exploiting the decorrelating effect of inhibitory feedback. Consequently, even
a single recurrent network of a few hundred neurons can serve as a natural
noise source for large ensembles of functional networks, each comprising
thousands of units. We successfully apply the proposed framework to a diverse
set of binary-unit networks with different dimensionalities and entropies, as
well as to a network reproducing handwritten digits with distinct predefined
frequencies. Finally, we show that the same design transfers to functional
networks of spiking neurons.
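The core design idea, that a recurrent inhibitory network without any private noise source can serve as a supply of weakly correlated pseudo-noise, can be illustrated with deterministic threshold units. In the minimal sketch below (our own assumptions throughout), the only randomness is the asynchronous update order; the units themselves are noiseless:

```python
# Deterministic inhibitory threshold network tapped as a noise source.
import numpy as np

rng = np.random.default_rng(6)
N, K, w = 300, 30, -1.0                        # units, in-degree, inhibitory weight
W = np.zeros((N, N))
for i in range(N):                             # fixed random inhibitory connectivity
    W[i, rng.choice(np.delete(np.arange(N), i), K, replace=False)] = w
theta = w * K * 0.5                            # threshold placing activity near 0.5

s = rng.integers(0, 2, N).astype(float)
taps = []
for _ in range(100_000):                       # asynchronous updates with a hard
    i = rng.integers(N)                        # threshold: no per-unit private noise
    s[i] = float(W[i] @ s > theta)
    taps.append((s[0], s[1]))                  # tap two units as noise channels

z = np.array(taps[20_000:])                    # discard the transient
print("activities:", z.mean(axis=0), "pairwise corr:", np.corrcoef(z.T)[0, 1])
```

Each functional network would then read its noise from a different subset of units; the inhibitory feedback keeps these channels close to uncorrelated.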
Firing rate homeostasis counteracts changes in stability of recurrent neural networks caused by synapse loss in Alzheimer's disease
The impairment of cognitive function in Alzheimer's disease is clearly
correlated with synapse loss. However, the mechanisms underlying this
correlation are only
poorly understood. Here, we investigate how the loss of excitatory synapses in
sparsely connected random networks of spiking excitatory and inhibitory neurons
alters their dynamical characteristics. Beyond the effects on the network's
activity statistics, we find that the loss of excitatory synapses on excitatory
neurons shifts the network dynamics toward the stable regime. The decreased
sensitivity to small perturbations of time-varying input can be considered an
indication of reduced computational capacity. A full recovery of the
network performance can be achieved by firing rate homeostasis, here
implemented by an up-scaling of the remaining excitatory-excitatory synapses.
By analysing the stability of the linearized network dynamics, we explain how
homeostasis can simultaneously maintain the network's firing rate and
sensitivity to small perturbations.
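The stability argument can be sketched with a random E-I connectivity matrix: deleting excitatory-to-excitatory synapses moves the leading eigenvalue of the linearized dynamics to the left, and mean-restoring up-scaling of the surviving synapses moves it back. Parameters below are placeholders, not the paper's fitted model:

```python
# Leading eigenvalue of the linearized rate dynamics tau dx/dt = -x + W x
# before synapse loss, after loss of 40% of E-to-E synapses, and after
# homeostatic up-scaling of the survivors.
import numpy as np

rng = np.random.default_rng(7)
N_e, N_i = 800, 200
J, g, p = 0.05, 2.0, 0.1                       # E weight, rel. inhibition, conn. prob.
N = N_e + N_i
W = np.hstack([
    J * rng.binomial(1, p, (N, N_e)),          # excitatory columns
    -g * J * rng.binomial(1, p, (N, N_i)),     # inhibitory columns
])

def leading_eig(W, tau=1.0):
    return np.linalg.eigvals((W - np.eye(len(W))) / tau).real.max()

print("intact            :", leading_eig(W))
W[:N_e, :N_e] *= rng.binomial(1, 0.6, (N_e, N_e))   # keep 60% of E-to-E synapses
print("after 40% E-E loss:", leading_eig(W))
W[:N_e, :N_e] /= 0.6                                 # restore the mean E-E input
print("with homeostasis  :", leading_eig(W))
```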